posterior inclusion probability


Uncertainty-Aware Sparse Identification of Dynamical Systems via Bayesian Model Averaging

Kashiwamura, Shuhei, Kato, Yusuke, Kori, Hiroshi, Okada, Masato

arXiv.org Machine Learning

In many problems of data-driven modeling for dynamical systems, the governing equations are not known a priori and must be selected phenomenologically from a large set of candidate interactions and basis functions. In such situations, point estimates alone can be misleading, because multiple model components may explain the observed data comparably well, especially when the data are limited or the dynamics exhibit poor identifiability. Quantifying the uncertainty associated with model selection is therefore essential for constructing reliable dynamical models from data. In this work, we develop a Bayesian sparse identification framework for dynamical systems with coupled components, aimed at inferring both interaction structure and functional form together with principled uncertainty quantification. The proposed method combines sparse modeling with Bayesian model averaging, yielding posterior inclusion probabilities that quantify the credibility of each candidate interaction and basis component. Through numerical experiments on oscillator networks, we show that the framework accurately recovers sparse interaction structures with quantified uncertainty, including higher-order harmonic components, phase-lag effects, and multi-body interactions. We also demonstrate that, even in a phenomenological setting where the true governing equations are not contained in the assumed model class, the method can identify effective functional components with quantified uncertainty. These results highlight the importance of Bayesian uncertainty quantification in data-driven discovery of dynamical models.


94130ea17023c4837f0dcdda95034b65-AuthorFeedback.pdf

Neural Information Processing Systems

In the revision, we shall add the following table on average computational times of all the 18 methods based on 10 replications. Without the Bayesian machinery in the paper, we cannot extract the posterior inclusion probabilities for structure recovery using (2.5) and provide consequent strong guarantees for graph selection in Theorem 3. iii) For scalability, we compute the MAP estimator instead of sampling from the full posterior.


Mean-field Variational Bayes for Sparse Probit Regression

Fasano, Augusto, Rebaudo, Giovanni

arXiv.org Machine Learning

We consider Bayesian variable selection for binary outcomes under a probit link with a spike-and-slab prior on the regression coefficients. Motivated by the computational challenges encountered by Markov chain Monte Carlo (MCMC) samplers in high-dimensional regimes, we develop a mean-field variational Bayes approximation in which all variational factors admit closed-form updates, and the evidence lower bound is available in closed form. This, in turn, allows the development of an efficient coordinate ascent variational inference algorithm to find the optimal values of the variational parameters. The approach produces posterior inclusion probabilities and parameter estimates, enabling interpretable selection and prediction within a single framework. As shown in both simulated and real data applications, the proposed method successfully identifies the important variables and is orders of magnitude faster than MCMC, while maintaining comparable accuracy.
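A coordinate ascent scheme of the kind described can be sketched as follows (a simplified illustration, not the paper's algorithm: it combines the Albert–Chib latent-variable augmentation for the probit link with a factorized spike-and-slab approximation, with slab variance `tau2`, prior inclusion probability `pi0`, and the synthetic data all being assumptions for the demo):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, p = 250, 8
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [1.5, -1.0]                  # two truly active coefficients
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)

tau2, pi0 = 4.0, 0.25          # assumed slab variance and prior inclusion prob
alpha = np.full(p, 0.5)        # variational q(gamma_j = 1)
mu = np.zeros(p)               # slab means of q(beta_j | gamma_j = 1)
s2 = (X**2).sum(0)             # x_j'x_j (latent probit noise variance is 1)
sig2 = 1.0 / (s2 + 1.0 / tau2) # closed-form slab variances

for _ in range(100):
    # q(z): truncated normals centered at the current linear predictor
    m = X @ (alpha * mu)
    r1 = norm.pdf(m) / np.clip(norm.cdf(m), 1e-10, 1.0)
    r0 = norm.pdf(m) / np.clip(norm.cdf(-m), 1e-10, 1.0)
    Ez = np.where(y == 1, m + r1, m - r0)
    # Coordinate updates of each factor q(beta_j, gamma_j)
    for j in range(p):
        resid = Ez - X @ (alpha * mu) + X[:, j] * alpha[j] * mu[j]
        mu[j] = sig2[j] * X[:, j] @ resid
        logit = (np.log(pi0 / (1 - pi0))
                 + 0.5 * np.log(sig2[j] / tau2)
                 + mu[j]**2 / (2 * sig2[j]))
        alpha[j] = 1.0 / (1.0 + np.exp(-logit))

print(np.round(alpha, 3))  # posterior inclusion probabilities per coefficient
```

Every update above is available in closed form, which is the property that lets this style of CAVI avoid MCMC sampling entirely while still returning per-coefficient inclusion probabilities.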